Avoiding redundant columns by adding classical Benders cuts to column generation subproblems

Authors

Abstract

When solving the linear programming (LP) relaxation of a mixed-integer program (MIP) with column generation, columns might be generated that are not needed to express any integer optimal solution. Such columns are called strongly redundant, and the dual bound obtained by the LP relaxation is potentially stronger if they are not generated. We introduce a sufficient condition for strong redundancy that can be checked by solving a compact LP. Using the solution of this LP, we generate classical Benders cuts for the subproblem so that the generation of strongly redundant columns is avoided. The potential of these cuts to improve the master problem's dual bound is evaluated computationally using an implementation in the branch-price-and-cut solver GCG. While their efficacy is limited on some problems, the benefit of applying the cuts is demonstrated on structured models to which a temporal decomposition can be applied.
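To make the column-generation setting concrete, the following is a minimal sketch of the standard master/pricing loop on a small cutting-stock instance. It illustrates only the generic scheme the abstract builds on; the paper's Benders-cut filtering of strongly redundant columns is not implemented, and the instance data, variable names, and brute-force knapsack pricing are illustrative assumptions.

```python
# Column generation for the cutting-stock LP relaxation (illustrative sketch).
# Master LP: min number of rolls s.t. piece demands are covered;
# pricing: knapsack over feasible cutting patterns, solved by brute force.
from itertools import product
from scipy.optimize import linprog

W = 10                # roll width (assumed toy data)
sizes = [3, 4, 5]     # piece sizes
demand = [4, 2, 3]    # required number of each piece

# initial patterns: each cuts as many copies of one piece size as fit
patterns = [[(W // sizes[i]) if j == i else 0 for j in range(len(sizes))]
            for i in range(len(sizes))]

def solve_master(patterns):
    """Solve the restricted master LP and return (objective, dual prices)."""
    c = [1.0] * len(patterns)
    # demand rows a.x >= d are passed to linprog as -a.x <= -d
    A_ub = [[-p[i] for p in patterns] for i in range(len(sizes))]
    b_ub = [-d for d in demand]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    duals = [-m for m in res.ineqlin.marginals]  # prices of the >= rows
    return res.fun, duals

while True:
    obj, duals = solve_master(patterns)
    # pricing subproblem: find the pattern of maximum dual value
    best_val, best_pat = 0.0, None
    for counts in product(*(range(W // s + 1) for s in sizes)):
        if sum(n * s for n, s in zip(counts, sizes)) <= W:
            val = sum(n * y for n, y in zip(counts, duals))
            if val > best_val:
                best_val, best_pat = val, list(counts)
    if best_val <= 1.0 + 1e-9:   # no column with negative reduced cost
        break
    patterns.append(best_pat)

print(obj)  # LP relaxation bound after convergence
```

In the paper's approach, cuts added to the pricing subproblem would exclude patterns that cannot appear in any integer optimal solution, so the loop above would never generate them in the first place.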


Similar articles

Approximate Linear Programming for Network Control: Column Generation and Subproblems

Approximate linear programming (ALP) has been shown to provide useful bounds on optimal average cost for multiclass queueing network control problems. Approximation architectures can be chosen so that a manageable number of variables gives moderate accuracy; however, the number of constraints grows exponentially in the number of queues. We use column generation to more efficiently solve the dua...


Inexact Cuts in Benders Decomposition

Benders' decomposition is a well-known technique for solving large linear programs with a special structure. In particular it is a popular technique for solving multi-stage stochastic linear programming problems. Early termination in the subproblems generated during Benders' decomposition (assuming dual feasibility) produces valid cuts which are inexact in the sense that they are not as constra...


Regularization by Adding Redundant Features

The Pseudo Fisher Linear Discriminant (PFLD) based on a pseudo-inverse technique shows a peaking behaviour of the generalization error for training sample sizes that are about the feature size: with an increase in the training sample size the generalization error at first decreases reaching the minimum, then increases reaching the maximum at the point where the training sample size is equal to ...



Journal

Journal title: Discrete Optimization

Year: 2021

ISSN: 1873-636X, 1572-5286

DOI: https://doi.org/10.1016/j.disopt.2021.100626